1 Evaluating estimators

Suppose you observe data X₁, ..., Xₙ that are iid observations with distribution F_θ indexed by some parameter θ. When trying to estimate θ, one may be interested in determining the properties of some estimator θ̂ of θ. In particular, the bias

    Bias(θ̂) = E(θ̂ − θ)

may be of interest. That is, the average difference between the estimator and the truth. Estimators with Bias(θ̂) = 0 are called unbiased.

Another (possibly more important) property of an estimator is how close it tends to be to the truth on average. The most common choice for evaluating estimator precision is the mean squared error,

    MSE(θ̂) = E[(θ̂ − θ)²].

When comparing a number of estimators, MSE is commonly used as a measure of quality. Directly applying the identity var(Y) = E(Y²) − [E(Y)]² to the random variable Y = θ̂ − θ, the above equation becomes

    MSE(θ̂) = [E(θ̂ − θ)]² + var(θ̂ − θ)
            = Bias(θ̂)² + var(θ̂),

where the last line follows from the definition of bias and the fact that var(θ̂ − θ) = var(θ̂), since θ is a constant.

For example, if X₁, ..., Xₙ are iid N(µ, σ²), then X̄ ~ N(µ, σ²/n). So the bias of X̄ as an estimator of µ is

    Bias(X̄) = E(X̄ − µ) = µ − µ = 0

and the MSE is

    MSE(X̄) = 0² + var(X̄) = σ²/n.

The above identity says that the precision of an estimator is a combination of the bias of that estimator and its variance. Therefore it is possible for a biased estimator to be more precise than an unbiased estimator if it is significantly less variable. This is known as the bias-variance tradeoff. We will see an example of this; first, the short sketch below checks the decomposition numerically for the sample mean.
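
As a quick numerical illustration of the decomposition MSE = Bias² + var (a minimal sketch added here for clarity; the values of k, n, µ, and σ are illustrative choices, not from the original notes):

# Monte Carlo check of MSE = Bias^2 + Var for the sample mean
# (illustrative values, not part of the original notes)
set.seed(1)
k <- 10000; n <- 25; mu <- 3; sigma <- 2

# k sample means, each from a dataset of size n
xbar <- replicate(k, mean(rnorm(n, mean=mu, sd=sigma)))

bias.hat <- mean(xbar - mu)        # should be near 0
var.hat  <- var(xbar)              # should be near sigma^2/n = 0.16
mse.hat  <- mean((xbar - mu)^2)    # should match bias.hat^2 + var.hat

c(bias.hat^2 + var.hat, mse.hat, sigma^2/n)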

1.2 Using Monte Carlo to explore properties of estimators

In some cases it can be difficult to explicitly calculate the MSE of an estimator. When this happens, Monte Carlo can be a useful alternative to a very cumbersome mathematical calculation. The example below is an instance of this.

Example: Suppose X₁, ..., Xₙ are iid N(θ, θ²) and we are interested in estimation of θ. Two reasonable estimators of θ are the sample mean

    θ̂₁ = (1/n) Σᵢ₌₁ⁿ Xᵢ

and the sample standard deviation

    θ̂₂ = √( (1/(n−1)) Σᵢ₌₁ⁿ (Xᵢ − X̄)² ).

To compare these two estimators by Monte Carlo for a specific n and θ:

1. Generate X₁, ..., Xₙ ~ N(θ, θ²).
2. Calculate θ̂₁ and θ̂₂.
3. Save (θ̂₁ − θ)² and (θ̂₂ − θ)².
4. Repeat steps 1-3 k times.
5. Then the means of the (θ̂₁ − θ)²'s and (θ̂₂ − θ)²'s, over the k replicates, are the Monte Carlo estimators of the MSEs of θ̂₁ and θ̂₂.

This basic approach can be used any time you are comparing estimators by Monte Carlo. The larger we choose k to be, the more accurate these estimates are. We implement this in R with the following code for θ = .5, .6, .7, ..., 10, n = 50, and k = 1000:

k = 1000
n = 50

# Sequence of values of theta
THETA <- seq(.5, 10, by=.1)

# Storage for the MSEs of each estimator
MSE <- matrix(0, length(THETA), 2)

# Loop through the values in THETA
for(j in 1:length(THETA))
{
   # Generate the k datasets of size n
   D <- matrix(rnorm(k*n, mean=THETA[j], sd=THETA[j]), k, n)

   # Calculate theta_hat1 (sample mean) for each data set
   ThetaHat_1 <- apply(D, 1, mean)

   # Calculate theta_hat2 (sample sd) for each data set
   ThetaHat_2 <- apply(D, 1, sd)

   # Save the MSEs
   MSE[j,1] <- mean( (ThetaHat_1 - THETA[j])^2 )
   MSE[j,2] <- mean( (ThetaHat_2 - THETA[j])^2 )
}

# Plot the results on the same axes
plot(THETA, MSE[,1], xlab=quote(theta), ylab="MSE",
     main=expression(paste("MSE for each value of ", theta)),
     type="l", col=2, cex.lab=1.3, cex.main=1.5)
lines(THETA, MSE[,2], col=4)
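
Because each estimated MSE is itself a sample mean over k replicates, a Monte Carlo standard error can be attached to it. A minimal sketch, reusing the objects left over from the last pass through the loop above (i.e., the largest value of θ):

# Monte Carlo standard error of an estimated MSE:
# the MSE estimate is a mean of k iid squared errors,
# so its standard error is sd(squared errors)/sqrt(k)
sq.err <- (ThetaHat_1 - THETA[length(THETA)])^2
mse.se <- sd(sq.err)/sqrt(k)
mse.se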

[Figure 1: Simulated values for the MSE of θ̂₁ and θ̂₂.]

From the plot we can see that θ̂₂, the sample standard deviation, is a uniformly better estimator of θ than θ̂₁, the sample mean. We can verify this simulation mathematically. Clearly the sample mean's MSE is

    MSE(θ̂₁) = θ²/n.

The MSE of the sample standard deviation is somewhat more difficult. It is well known that, in general, the sample variance V from a normal population is distributed so that

    (n−1)V/σ² ~ χ²ₙ₋₁,

where σ² is the true variance. In this case θ̂₂ = √V. The χ² distribution with k degrees of freedom has density function

    p(x) = ( (1/2)^{k/2} / Γ(k/2) ) x^{k/2 − 1} e^{−x/2},

where Γ is the gamma function. Using this we can derive the expected value of √V:

    E(√V) = √(σ²/(n−1)) E( √((n−1)V/σ²) )
          = √(σ²/(n−1)) ∫₀^∞ √x ( (1/2)^{(n−1)/2} / Γ((n−1)/2) ) x^{(n−1)/2 − 1} e^{−x/2} dx,

which follows from the definition of expectation and the expression above for the χ²ₙ₋₁ density. The trick now is to rearrange terms and factor out constants properly so that the integrand becomes another χ² density:

    E(√V) = √(σ²/(n−1)) ( (1/2)^{(n−1)/2} / Γ((n−1)/2) ) ∫₀^∞ x^{n/2 − 1} e^{−x/2} dx
          = √(σ²/(n−1)) ( Γ(n/2) (1/2)^{(n−1)/2} / ( Γ((n−1)/2) (1/2)^{n/2} ) ) ∫₀^∞ ( (1/2)^{n/2} / Γ(n/2) ) x^{n/2 − 1} e^{−x/2} dx,

where the final integrand is exactly the χ² density with n degrees of freedom. Now we know that the integral in the last line is 1. The rest is just simplifying constants, using (1/2)^{(n−1)/2}/(1/2)^{n/2} = √2:

    E(√V) = √(σ²/(n−1)) · √2 · Γ(n/2)/Γ((n−1)/2)
          = √(2/(n−1)) · ( Γ(n/2)/Γ((n−1)/2) ) · σ.

Therefore

    E(θ̂₂) = √(2/(n−1)) · ( Γ(n/2)/Γ((n−1)/2) ) · θ.

So the bias is

    Bias(θ̂₂) = E(θ̂₂) − θ = −θ ( 1 − √(2/(n−1)) Γ(n/2)/Γ((n−1)/2) ).

To calculate the variance of θ̂₂ we also need E(θ̂₂²). θ̂₂² is the sample variance, which we know is an unbiased estimator of the variance θ², so

    E(θ̂₂²) = θ².
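
As a quick sanity check on this expectation formula, one can compare it with a direct simulation. A small sketch, with illustrative values of n, θ, and k that are not from the notes:

# Compare E(theta_hat2) = sqrt(2/(n-1)) * Gamma(n/2)/Gamma((n-1)/2) * theta
# against a direct simulation (illustrative n, theta, and k)
set.seed(1)
n <- 50; theta <- 2; k <- 100000

# for very large n, use exp(lgamma(n/2) - lgamma((n-1)/2)) to avoid overflow
closed.form <- sqrt(2/(n-1)) * gamma(n/2)/gamma((n-1)/2) * theta
simulated   <- mean(replicate(k, sd(rnorm(n, mean=theta, sd=theta))))

c(closed.form, simulated)   # should agree to a few decimal places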

so the variance of θ̂₂ is

    var(θ̂₂) = E(θ̂₂²) − [E(θ̂₂)]² = θ² ( 1 − (2/(n−1)) [Γ(n/2)/Γ((n−1)/2)]² ).

Finally,

    MSE(θ̂₂) = Bias(θ̂₂)² + var(θ̂₂)
             = θ² ( 1 − √(2/(n−1)) Γ(n/2)/Γ((n−1)/2) )² + θ² ( 1 − (2/(n−1)) [Γ(n/2)/Γ((n−1)/2)]² ),

which, after expanding the square and cancelling, simplifies to

    MSE(θ̂₂) = 2θ² ( 1 − √(2/(n−1)) Γ(n/2)/Γ((n−1)/2) ).

It is a fact that 2( 1 − √(2/(n−1)) Γ(n/2)/Γ((n−1)/2) ) < 1/n for any n. This implies that MSE(θ̂₂) < MSE(θ̂₁) for any n and any θ. We can check this derivation by plotting the MSEs and comparing with the simulation-based MSEs:

# for each Q: Q[1] is theta, and Q[2] is n

# MSE of theta_hat1
MSE1 <- function(Q) (Q[1]^2)/Q[2]

# MSE of theta_hat2
MSE2 <- function(Q)
{
   theta <- Q[1]; n <- Q[2]
   G <- gamma(n/2)/gamma( (n-1)/2 )
   bias <- theta * (1 - sqrt(2/(n-1)) * G )
   variance <- (theta^2) * (1 - (2/(n-1)) * G^2 )
   return(bias^2 + variance)
}

# Grid of values of theta for n=50
THETA <- cbind(matrix( seq(.5, 10, length=100), 100, 1 ), rep(50,100))

# Storage for MSE of thetahat1 (column 1) and thetahat2 (column 2)
MSE <- matrix(0, 100, 2)

# MSE of theta_hat1 for each theta
MSE[,1] <- apply(THETA, 1, MSE1)

# MSE of theta_hat2 for each theta
MSE[,2] <- apply(THETA, 1, MSE2)

plot(THETA[,1], MSE[,1], xlab=quote(theta), ylab="MSE",
     main=expression(paste("MSE for each value of ", theta)),
     type="l", col=2, cex.lab=1.3, cex.main=1.5)
lines(THETA[,1], MSE[,2], col=4)

[Figure 2: True values for the MSE of θ̂₁ and θ̂₂.]

Clearly the conclusion is the same as in the simulated case: θ̂₂ has a lower MSE than θ̂₁ for any value of θ, but it was far less complicated to show this by simulation.

Exercise 1: Consider data X₁, ..., Xₙ iid N(µ, σ²) where we are interested in estimating σ² and µ is unknown. Two possible estimators are:

    θ̂₁ = (1/n) Σᵢ₌₁ⁿ (Xᵢ − X̄)²

and the conventional unbiased sample variance:

    θ̂₂ = (1/(n−1)) Σᵢ₌₁ⁿ (Xᵢ − X̄)².

Estimate the MSE of each of these estimators when n = 15 for σ² = .5, .6, ..., 3, and evaluate which estimator is closer to the truth on average for each value of σ². A possible starting point is sketched below.
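
A possible scaffold for Exercise 1 (one approach; since the MSEs of both estimators do not depend on µ, the simulation below sets µ = 0 without loss of generality):

# Scaffold for Exercise 1 (one possible approach; extend as needed)
set.seed(1)
k <- 1000; n <- 15
SIGMA2 <- seq(.5, 3, by=.1)          # grid of true variances
MSE <- matrix(0, length(SIGMA2), 2)

for(j in 1:length(SIGMA2))
{
   D <- matrix(rnorm(k*n, mean=0, sd=sqrt(SIGMA2[j])), k, n)  # mu = 0 w.l.o.g.
   V1 <- apply(D, 1, function(x) mean((x - mean(x))^2))       # divide by n
   V2 <- apply(D, 1, var)                                     # divide by n-1
   MSE[j,1] <- mean((V1 - SIGMA2[j])^2)
   MSE[j,2] <- mean((V2 - SIGMA2[j])^2)
}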

2 Properties of hypothesis tests

Consider deciding between two competing statistical hypotheses, H₀, the null hypothesis, and H₁, the alternative hypothesis, based on data X₁, ..., Xₙ. A test statistic is a function of the data, T = T(X₁, ..., Xₙ), such that if T ∈ R_α then you reject H₀; otherwise you do not. The set R_α is called the rejection region and is chosen so that

    P(reject H₀ | H₀ is true) = P(T ∈ R_α | H₀ is true) = α.

α is referred to as the level of the test, and is the probability of incorrectly rejecting H₀; α is typically chosen by the user, and .05 is a common choice. For example, in a two-sided z-test of H₀: µ = 0 when σ is known, the rejection region is

    R_α = (−∞, z_{α/2}) ∪ (z_{1−α/2}, ∞),

where z_a denotes the a'th quantile of a standard normal distribution. When α = .05 this yields the familiar rejection region (−∞, −1.96) ∪ (1.96, ∞).

A good hypothesis test is one that, for a small value of α, has large power, which is the probability of rejecting H₀ when H₀ is indeed false. When testing the hypothesis H₀: θ = θ₀ for some specific null value θ₀, and θ_true ≠ θ₀, the power is

    Power(θ_true) = P(T ∈ R_α | θ = θ_true).

Some primary determinants of the power of a test are:

- the sample size;
- the difference between the null value and the true value (generally referred to as the effect size);
- the variance in the observed data.

In many settings practitioners are interested in either a) how far the true value of θ must be from θ₀ or b) for a fixed effect size, how large the sample size must be for the power to reach some nominal level, say 80%. Inquiries of this type are referred to as power analyses.
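
It is easy to verify the level claim by simulation: under H₀ the two-sided z-test should reject roughly 100α% of the time. A minimal sketch (with σ = 1 and illustrative choices of k and n, not from the notes):

# Check that the two-sided z-test has level alpha under H_0: mu = 0
# (sigma known and equal to 1; illustrative k and n)
set.seed(1)
alpha <- .05; k <- 10000; n <- 30

Z <- replicate(k, sqrt(n)*mean(rnorm(n)))   # z-statistics under H_0
mean(abs(Z) > qnorm(1 - alpha/2))           # should be close to .05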

Example 2: Power of the two-sample z-test

Suppose you observe X₁, ..., Xₙ iid N(µ_X, σ²) and Y₁, ..., Y_m iid N(µ_Y, σ²), where µ_X, µ_Y are unknown and σ² is known. We are interested in a two-sided test of the hypothesis H₀: µ_X − µ_Y = 0. A common statistic for testing such hypotheses, when m = n, is the z-statistic

    T = √n (X̄ − Ȳ) / (√2 σ).

It is well known that, under H₀, T has a standard normal distribution. It can be shown that, for any value of µ_D = µ_X − µ_Y, this test is the most powerful level-α test of H₀. (Similarly, when the variances are unknown and the sample sizes/variances are potentially unequal, Student's t-test is the most powerful level-α test of this null hypothesis.) µ_D is the measure of effect size in this test, and Power(µ_D) is a monotonically increasing function of |µ_D|. For example, if µ_D is very small it is intuitive that we would be less likely to reject H₀ than if µ_D were large.

We will investigate the power of the two-sample z-test for sample sizes n = 10, 20, 30, 40, 50 as a function of the true mean difference µ_D. The larger the true σ², the smaller the power will be (for fixed n and µ_D), but we will not investigate this effect in this example. Each dataset will be generated with σ = 1, the two samples will have equal sizes, and α = .05. The basic algorithm is:

1. Generate datasets of the form X₁, ..., Xₙ ~ N(0, 1) and Y₁, ..., Yₙ ~ N(µ_D, 1).
2. Calculate T.
3. Save I = I(|T| > z_{1−α/2}).
4. Repeat k times.
5. The mean of the k values of the I's is the Monte Carlo estimate of Power(µ_D).

# alpha level
alpha <- .05

# number of simulation reps
# (the value of k is garbled in the source; 2000 is assumed here)
k <- 2000

# sample sizes
n <- 10*c(1:5)

# the mu_D's
mu_D <- seq(0, 2, by=.1)

# storage for the estimated Powers
Power <- matrix(0, length(mu_D), 5)

for(i in 1:5)
{
   for(j in 1:length(mu_D))
   {
      # Generate k datasets of size n[i]
      X <- matrix( rnorm(n[i]*k), k, n[i])
      Y <- matrix( rnorm(n[i]*k, mean=mu_D[j]), k, n[i])

      # Get sample means for each of the k datasets
      Xmeans <- apply(X, 1, mean)
      Ymeans <- apply(Y, 1, mean)

      # Calculate the Z statistics
      T <- sqrt(n[i])*(Xmeans - Ymeans)/sqrt(2)

      # Indicators of the z-statistics being
      # in the rejection region
      I <- (abs(T) > qnorm(1-(alpha/2)))

      # Save the estimated power
      Power[j,i] <- mean(I)
   }
}

plot(mu_D, Power[,1], xlab=quote(mu(D)), ylab=expression(
     paste("Power(", mu(D), ")")), col=2, cex.lab=1.3, cex.main=1.5,
     main=expression(paste("Power(", mu(D), ") vs. ", mu(D))), type="l" )
points(mu_D, Power[,1], col=2); points(mu_D, Power[,2], col=3)
points(mu_D, Power[,3], col=4); points(mu_D, Power[,4], col=5)
points(mu_D, Power[,5], col=6); lines(mu_D, Power[,2], col=3)
lines(mu_D, Power[,3], col=4); lines(mu_D, Power[,4], col=5)
lines(mu_D, Power[,5], col=6); abline(h=alpha)
legend(1.5, .3, c("n = 10", "n = 20", "n = 30", "n = 40", "n = 50"),
       pch=1, col=c(2:6), lty=1)

It is actually straightforward to calculate the power of the two-sample z-test. If µ_D is the true mean difference, then T is a standard normal random variable shifted by √n µ_D/√2: in the simulation E(X̄ − Ȳ) = −µ_D, but for a two-sided test the power at −µ_D equals the power at +µ_D, so we may take the shift to be positive. Letting Z denote a standard normal random variable, the power as a function of µ_D is:

    Power(µ_D) = P(|T| > z_{1−α/2})
               = 1 − P( z_{α/2} ≤ T ≤ z_{1−α/2} )
               = 1 − P( z_{α/2} ≤ Z + √n µ_D/√2 ≤ z_{1−α/2} )
               = 1 − P( z_{α/2} − √n µ_D/√2 ≤ Z ≤ z_{1−α/2} − √n µ_D/√2 )
               = 1 − ( P(Z ≤ z_{1−α/2} − √n µ_D/√2) − P(Z ≤ z_{α/2} − √n µ_D/√2) )
               = 1 − ( Φ(z_{1−α/2} − √n µ_D/√2) − Φ(z_{α/2} − √n µ_D/√2) ),

where Φ denotes the standard normal CDF. Notice that as n → ∞,

    lim Φ(z_{1−α/2} − √n µ_D/√2) = Φ(−∞) = 0,

and similarly for Φ(z_{α/2} − √n µ_D/√2); therefore

    lim Power(µ_D) = 1.

In other words, no matter how small µ_D > 0 is, the power to detect it as significantly different from 0 goes to 1 as the sample size increases.
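
The closed form also answers the power-analysis question raised earlier: ignoring the lower-tail term Φ(z_{α/2} − √n µ_D/√2), which is negligible for µ_D > 0, and solving Power(µ_D) = target for n gives n ≈ 2(z_{1−α/2} + z_{target})²/µ_D² per group (with σ = 1). The helper below is an illustration of this, not part of the original notes:

# Approximate per-group sample size for the two-sample z-test (sigma = 1),
# solving Power = 1 - Phi(z_{1-alpha/2} - sqrt(n)*mu_D/sqrt(2)) for n
# (lower-tail term ignored; illustrative helper, not from the notes)
n.required <- function(mu_D, power=.8, alpha=.05)
   ceiling( 2 * (qnorm(1-alpha/2) + qnorm(power))^2 / mu_D^2 )

n.required(.5)   # about 63 per group for 80% power at mu_D = 0.5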

[Figure 3: Simulated power of the two-sample z-test for sample sizes n = 10, 20, 30, 40, 50 and µ_D ranging from 0 up to 2.]

To check this calculation we plot the theoretical power and compare it with the simulation:

# alpha level
alpha <- .05

# sample sizes
n <- 10*c(1:5)

# the mu_D's
mu_D <- seq(0, 2, by=.1)

# storage for the true Powers
Power <- matrix(0, length(mu_D), 5)

for(i in 1:5)
{
   for(j in 1:length(mu_D))
   {
      Power[j,i] <- 1 - ( pnorm( qnorm(1-alpha/2) - sqrt(n[i])*mu_D[j]/sqrt(2) ) -
                          pnorm( qnorm(alpha/2)   - sqrt(n[i])*mu_D[j]/sqrt(2) ) )
   }
}

# plot the results
plot(mu_D, Power[,1], xlab=quote(mu(D)), ylab=expression(
     paste("Power(", mu(D), ")")), col=2, cex.lab=1.3, cex.main=1.5,
     main=expression(paste("Power(", mu(D), ") vs. ", mu(D))), type="l" )
points(mu_D, Power[,1], col=2); points(mu_D, Power[,2], col=3)
points(mu_D, Power[,3], col=4); points(mu_D, Power[,4], col=5)
points(mu_D, Power[,5], col=6); lines(mu_D, Power[,2], col=3)
lines(mu_D, Power[,3], col=4); lines(mu_D, Power[,4], col=5)
lines(mu_D, Power[,5], col=6); abline(h=alpha)
legend(1.5, .3, c("n = 10", "n = 20", "n = 30", "n = 40", "n = 50"),
       pch=1, col=c(2:6), lty=1)

[Figure 4: Theoretical power of the two-sample z-test for sample sizes n = 10, 20, 30, 40, 50 and µ_D ranging from 0 up to 2.]

We can see the theoretical calculation matches the simulation. In this case the power calculation is simple, but for most hypothesis tests power calculations are intractable, so simulation-based power analysis is the only option.
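
As an aside, for the common t-test variants base R does provide a closed-form helper, power.t.test; its answer should be close to (though not identical to) the z-test power above, since it uses the t rather than the normal distribution:

# Theoretical power of the two-sample t-test from base R's power.t.test;
# for n = 50 and mu_D = 0.5 this should be close to the z-test value
power.t.test(n=50, delta=0.5, sd=1, sig.level=.05, type="two.sample")$power

# z-test value for comparison
1 - ( pnorm( qnorm(.975) - sqrt(50)*0.5/sqrt(2) ) -
      pnorm( qnorm(.025) - sqrt(50)*0.5/sqrt(2) ) )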

Exercise 2: Using a similar approach to the above, consider the same problem except X₁, ..., Xₙ ~ N(µ_X, σ²_X) and Y₁, ..., Yₙ ~ N(µ_Y, σ²_Y) (both of equal sample size), where σ²_X, σ²_Y are not known but are assumed to be equal. Use the statistic

    T = √n (X̄ − Ȳ) / √(σ̂²_X + σ̂²_Y),

where σ̂²_X and σ̂²_Y are the unbiased sample variances from Exercise 1, calculated for each set of data. Under H₀, T has a t-distribution with 2n − 2 degrees of freedom. Estimate the power of this test for sample sizes n = 10, 20, 30, 40, 50 and for the true µ_X − µ_Y ranging from 0 up to 2. In this case the theoretical power calculation, although possible, is significantly more difficult. A minimal scaffold is sketched below.
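
A possible scaffold for Exercise 2 (one way to set it up, mirroring the code from Example 2; the only substantive changes are the t-statistic and the t-distribution rejection threshold):

# Scaffold for Exercise 2 (one possible approach; k is an illustrative choice)
set.seed(1)
alpha <- .05; k <- 2000
n <- 10*c(1:5)
mu_D <- seq(0, 2, by=.1)
Power <- matrix(0, length(mu_D), 5)

for(i in 1:5)
{
   for(j in 1:length(mu_D))
   {
      X <- matrix(rnorm(n[i]*k), k, n[i])
      Y <- matrix(rnorm(n[i]*k, mean=mu_D[j]), k, n[i])

      # t-statistics: sqrt(n) * (Xbar - Ybar) / sqrt(s2_X + s2_Y)
      T <- sqrt(n[i]) * (apply(X, 1, mean) - apply(Y, 1, mean)) /
           sqrt( apply(X, 1, var) + apply(Y, 1, var) )

      # two-sided rejection at the t quantile with 2n - 2 df
      Power[j,i] <- mean( abs(T) > qt(1 - alpha/2, 2*n[i] - 2) )
   }
}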
